38 research outputs found

    Analysis of Inter-Chip Communication Patterns on Multi-Core Distributed Shared-Memory Computers

    Get PDF
    Multi-core multi-socket distributed shared-memory computers (DSM computers, for short) have become an important node architecture in scientific computing as they provide substantial computational capacity with relatively low space and power requirements. Compared to conventional computer networks, inter-chip networks used in DSM computers feature higher bandwidth, lower latency and tighter integration with the CPU. The inter-chip network is a shared resource among the user application and many other services, which can lead to considerable variation of execution times of identical communication tasks. In this work, we explore traffic patterns resulting from MPI collective communication primitives and investigate the question whether inter-chip link load is a reliable indicator and predictor for the execution time of collective communication primitives on a DSM computer. Our experiments on a Sun Fire X4600 M2 DSM computer with 32 cores (eight quad-core CPUs) indicate that specific single link loads are positively correlated with the execution time of MPI_ALLREDUCE. Observing patterns over multiple links allows refinement of the single-link observation.
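
    As a concrete illustration of the kind of measurement such a study relies on, the sketch below times repeated MPI_Allreduce calls with mpi4py and reports the per-rank spread. It is only a generic measurement loop under assumed parameters (message size, repetition count), not the paper's instrumentation, which additionally monitored inter-chip link load on the Sun Fire X4600 M2.

```python
# Minimal measurement sketch (not the paper's instrumentation): time repeated
# MPI_Allreduce calls to observe run-to-run variation on a shared node.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n = 1 << 20                      # message size: 1 Mi doubles (assumption)
sendbuf = np.ones(n, dtype=np.float64)
recvbuf = np.empty_like(sendbuf)

samples = []
for _ in range(100):             # repetition count is an arbitrary choice
    comm.Barrier()               # align ranks before each measurement
    t0 = MPI.Wtime()
    comm.Allreduce(sendbuf, recvbuf, op=MPI.SUM)
    samples.append(MPI.Wtime() - t0)

# Large spread between identical calls hints at contention on shared
# resources (such as inter-chip links) rather than algorithmic differences.
if rank == 0:
    print("min/median/max (s):",
          np.min(samples), np.median(samples), np.max(samples))
```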

    On Mixed Precision Iterative Refinement for Eigenvalue Problems

    Get PDF
    We investigate novel iterative refinement methods for solving eigenvalue problems which are derived from Newton's method. In particular, approaches for the solution of the resulting linear systems based on saddle point problems are compared and evaluated. The algorithms presented exploit the performance benefits of mixed precision, where the majority of operations are performed at a lower working precision and only critical steps within the algorithm are computed in a higher target precision, leading to a solution which is accurate to the target precision. A complexity analysis shows that the best of the novel methods presented requires fewer floating point operations than the only previously existing iterative refinement eigensolver, by Dongarra, Moler and Wilkinson.
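
    For background, the sketch below shows the generic shape of such a method: a Newton correction for an approximate eigenpair of a symmetric matrix, written as a bordered (saddle point) system that is assembled and solved in float32 while residuals are accumulated in float64. All concrete choices (normalization constraint, dense solver, step count) are assumptions for illustration; this is not the paper's algorithm.

```python
# Generic mixed-precision Newton refinement of an eigenpair (illustrative
# sketch, not the paper's algorithm): residuals in float64 (target precision),
# bordered correction system assembled and solved in float32 (working precision).
import numpy as np

def refine_eigenpair(A, lam, x, steps=5):
    A64 = A.astype(np.float64)
    x = x.astype(np.float64)
    x /= np.linalg.norm(x)
    n = A.shape[0]
    for _ in range(steps):
        r = A64 @ x - lam * x                    # residual in target precision
        # Bordered (saddle point) Newton system, solved in working precision.
        J = np.zeros((n + 1, n + 1), dtype=np.float32)
        J[:n, :n] = (A64 - lam * np.eye(n)).astype(np.float32)
        J[:n, n] = -x.astype(np.float32)
        J[n, :n] = x.astype(np.float32)
        rhs = np.zeros(n + 1, dtype=np.float32)
        rhs[:n] = -r.astype(np.float32)
        d = np.linalg.solve(J, rhs).astype(np.float64)
        x += d[:n]
        lam += d[n]
        x /= np.linalg.norm(x)
    return lam, x

A = np.random.rand(50, 50); A = (A + A.T) / 2    # symmetric test matrix
w, V = np.linalg.eigh(A.astype(np.float32))      # crude low-precision start
lam, x = refine_eigenpair(A, float(w[0]), V[:, 0])
print(np.linalg.norm(A @ x - lam * x))           # residual should approach double-precision level
```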

    Resilient gossip-inspired all-reduce algorithms for high-performance computing - Potential, limitations, and open questions

    Get PDF
    We investigate the usefulness of gossip-based reduction algorithms in a high-performance computing (HPC) context. We compare them to state-of-the-art deterministic parallel reduction algorithms in terms of fault tolerance and resilience against silent data corruption (SDC) as well as in terms of performance and scalability. New gossip-based reduction algorithms are proposed, which significantly improve the state-of-the-art in terms of resilience against SDC. Moreover, a new gossip-inspired reduction algorithm is proposed, which promises a much more competitive runtime performance in an HPC context than classical gossip-based algorithms, in particular for low accuracy requirements. This work has been partially funded by the Spanish Ministry of Science and Innovation [contract TIN2015-65316]; by the Government of Catalonia [contracts 2014-SGR-1051, 2014-SGR-1272]; by the RoMoL ERC Advanced Grant [grant number GA 321253]; and by the Vienna Science and Technology Fund (WWTF) through project ICT15-113.
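
    For readers unfamiliar with gossip-based reduction, the toy sketch below implements classical push-sum averaging, the kind of baseline the abstract compares against; it is not one of the new algorithms proposed in the paper, and the peer selection and round count are arbitrary choices.

```python
# Toy push-sum gossip averaging (background only, not the paper's algorithms):
# every process keeps a (value, weight) pair, keeps half and pushes half to a
# random peer each round; value/weight converges to the global average everywhere.
import numpy as np

def push_sum_average(values, rounds=50, seed=0):
    rng = np.random.default_rng(seed)
    p = len(values)
    s = np.array(values, dtype=np.float64)   # running sums
    w = np.ones(p)                           # running weights
    for _ in range(rounds):
        new_s, new_w = np.zeros(p), np.zeros(p)
        for i in range(p):
            j = rng.integers(p)                          # random gossip target
            new_s[i] += s[i] / 2; new_w[i] += w[i] / 2   # keep half
            new_s[j] += s[i] / 2; new_w[j] += w[i] / 2   # push half
        s, w = new_s, new_w
    return s / w                             # per-process average estimates

print(push_sum_average([1.0, 2.0, 3.0, 4.0]))  # all entries approach 2.5
```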

    Scalable Resilience Against Node Failures for Communication-Hiding Preconditioned Conjugate Gradient and Conjugate Residual Methods

    Full text link
    The observed and expected continued growth in the number of nodes in large-scale parallel computers gives rise to two major challenges: global communication operations are becoming major bottlenecks due to their limited scalability, and the likelihood of node failures is increasing. We study an approach for addressing these challenges in the context of solving large sparse linear systems. In particular, we focus on the pipelined preconditioned conjugate gradient (PPCG) method, which has been shown to successfully deal with the first of these challenges. In this paper, we address the second challenge. We present extensions to the PPCG solver and two of its variants which make them resilient against the failure of a compute node while fully preserving their communication-hiding properties and thus their scalability. The basic idea is to efficiently communicate a few redundant copies of local vector elements to neighboring nodes with very little overhead. In case a node fails, these redundant copies are gathered at a replacement node, which can then accurately reconstruct the lost parts of the solver's state. After that, the parallel solver can continue as in the failure-free scenario. Experimental evaluations of our approach illustrate very low average runtime overheads compared to the standard non-resilient algorithms. This shows that scalable algorithmic resilience can be achieved at low extra cost.
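
    The following toy sketch illustrates only the data-layout idea described above, with hypothetical block sizes and a simple 1D node topology: each node keeps redundant copies of its neighbors' vector blocks, and a replacement node restores a lost block from a surviving neighbor. It simulates everything in one process and omits the actual PPCG solver and MPI communication.

```python
# Schematic redundancy sketch (data layout only, not the resilient PPCG solver):
# each node stores its own block of a distributed vector plus copies of its
# neighbors' blocks, so a replacement node can reassemble a lost block.
import numpy as np

blocks = {i: np.full(4, float(i)) for i in range(4)}      # local vector blocks (hypothetical sizes)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}     # assumed 1D-ring node topology

# Redundant copies: node i also stores the blocks of its neighbors.
copies = {i: {j: blocks[j].copy() for j in neighbors[i]} for i in blocks}

failed = 2                                                 # simulate loss of one node
del blocks[failed]

# A replacement node gathers the redundant copy from any surviving neighbor.
for j in neighbors[failed]:
    if j in blocks:
        blocks[failed] = copies[j][failed].copy()
        break

print(blocks[failed])   # lost block recovered exactly: [2. 2. 2. 2.]
```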

    libNMF -- A Library for Nonnegative Matrix Factorization

    Get PDF
    We present libNMF -- a computationally efficient, high-performance library for computing nonnegative matrix factorizations (NMF) written in C. Various algorithms and algorithmic variants for computing NMF are supported. libNMF is based on external routines from BLAS (Basic Linear Algebra Subprograms), LAPACK (Linear Algebra PACKage) and ARPACK, which provide efficient building blocks for performing central vector and matrix operations. Since modern BLAS implementations support multi-threading, libNMF can exploit the potential of multi-core architectures. In this paper, the basic NMF algorithms contained in libNMF and existing implementations found in the literature are briefly reviewed. Then, libNMF is evaluated in terms of computational efficiency and numerical accuracy and compared with the best existing codes available. libNMF is publicly available at http://rlcta.univie.ac.at/software.
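
    As background on the kind of BLAS-heavy kernels such a library is built around, the sketch below implements the classic Lee-Seung multiplicative update rules for NMF in NumPy; it is an illustration of the algorithm family, not libNMF's C implementation, and the rank, iteration count and epsilon are arbitrary choices.

```python
# Classic Lee-Seung multiplicative updates for NMF (V ~= W @ H), in NumPy.
# Illustrative only; not libNMF's C implementation.
import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, preserves nonnegativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W
    return W, H

V = np.random.default_rng(1).random((60, 40))  # nonnegative test matrix
W, H = nmf_multiplicative(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```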

    Attacking Graph Neural Networks with Bit Flips: Weisfeiler and Lehman Go Indifferent

    Full text link
    Prior attacks on graph neural networks have mostly focused on graph poisoning and evasion, neglecting the network's weights and biases. Traditional weight-based fault injection attacks, such as the bit flip attacks used against convolutional neural networks, do not consider the unique properties of graph neural networks. We propose the Injectivity Bit Flip Attack, the first bit flip attack designed specifically for graph neural networks. Our attack targets the learnable neighborhood aggregation functions in quantized message passing neural networks, degrading their ability to distinguish graph structures and causing them to lose the expressive power of the Weisfeiler-Lehman test. Our findings suggest that exploiting mathematical properties specific to certain graph neural network architectures can significantly increase their vulnerability to bit flip attacks. Injectivity Bit Flip Attacks can degrade maximally expressive Graph Isomorphism Networks trained on various graph property prediction datasets to random output by flipping only a small fraction of the network's bits, demonstrating their higher destructive power compared to a bit flip attack transferred from convolutional neural networks. Our attack is transparent and motivated by theoretical insights which are confirmed by extensive empirical results.
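
    The snippet below only illustrates the basic primitive involved, namely what a single bit flip does to an int8-quantized weight after dequantization; the quantization scale and flipped bit position are arbitrary assumptions, and this is not the paper's attack on aggregation-function injectivity.

```python
# Effect of a single bit flip on an int8-quantized weight (generic illustration,
# not the Injectivity Bit Flip Attack itself).
import numpy as np

scale = 0.05                               # assumed per-tensor quantization scale
w = np.array([23], dtype=np.int8)          # quantized weight: 23 * 0.05 = 1.15

bit = 6                                    # flip a high-order magnitude bit
w_flipped = w ^ np.int8(1 << bit)          # 23 XOR 64 = 87

print("before:", w[0] * scale)             # ~1.15
print("after: ", w_flipped[0] * scale)     # ~4.35, a large jump from one flipped bit
```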

    Non-Redundant Graph Neural Networks with Improved Expressiveness

    Full text link
    Message passing graph neural networks iteratively compute node embeddings by aggregating messages from all neighbors. This procedure can be viewed as a neural variant of the Weisfeiler-Leman method, which limits their expressive power. Moreover, oversmoothing and oversquashing restrict the number of layers these networks can effectively utilize. The repeated exchange and encoding of identical information in message passing amplifies oversquashing. We propose a novel aggregation scheme based on neighborhood trees, which allows for controlling the redundancy by pruning branches of the unfolding trees underlying standard message passing. We prove that reducing redundancy improves expressivity and experimentally show that it alleviates oversquashing. We investigate the interaction between redundancy in message passing and redundancy in computation and propose a compact representation of neighborhood trees, from which we compute node and graph embeddings via a neural tree canonization technique. Our method is provably more expressive than the Weisfeiler-Leman method, less susceptible to oversquashing than message passing neural networks, and provides high classification accuracy on widely-used benchmark datasets.
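
    The toy sketch below illustrates the kind of redundancy being targeted: in the unfolding tree of standard message passing, a message is bounced straight back to its sender, and pruning such repeats shrinks the tree. It is a simplified illustration (counting tree sizes on a 4-cycle), not the paper's neighborhood-tree construction.

```python
# Size of the depth-k unfolding tree of standard message passing, with and
# without the child that just bounces a message back to its sender.
# Illustration of redundancy only; not the paper's neighborhood trees.
def unfolding_size(adj, v, depth, parent=None, prune=False):
    """Count nodes in the unfolding tree rooted at v."""
    if depth == 0:
        return 1
    total = 1
    for u in adj[v]:
        if prune and u == parent:
            continue                      # skip the redundant back-edge
        total += unfolding_size(adj, u, depth - 1, parent=v, prune=prune)
    return total

# Small cycle graph 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(unfolding_size(adj, 0, depth=4))               # full unfolding tree: 31 nodes
print(unfolding_size(adj, 0, depth=4, prune=True))   # without immediate repeats: 9 nodes
```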

    Adaptive Inner-Outer Inverse Iteration for Eigenvector Computations

    No full text
    Inverse iteration is a standard technique for finding selected eigenvectors associated with eigenvalues which are known approximately. At each step the solution of a linear system of equations is required, which is usually done by factorizing the system matrix. When direct factorization is impractical, iterative methods can be applied for solving the linear systems. Several results about termination criteria for the inner iteration of this inner-outer iterative scheme have been published. Our experiences indicate that they are too stringent in practice. As an alternative, an adaptive algorithm is motivated and proposed, in which the inner tolerance is decreased only when required. A detailed discussion of implementation details of the inner-outer iterative approach is given. Results with prototype implementations have been very promising: adaptively selecting the tolerance in the inner iteration results in significant reductions in the computational effort compared to other approaches.
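
    The sketch below illustrates the general inner-outer idea with an adaptive inner tolerance under simplifying assumptions (SPD matrix, zero shift, a hand-rolled CG as the inner solver, an arbitrary stagnation test); it is not the algorithm proposed in the paper.

```python
# Inner-outer inverse iteration sketch: inexact inner CG solves whose tolerance
# is tightened only when the outer eigenvalue residual stops improving.
# Simplified illustration, not the paper's adaptive algorithm.
import numpy as np

def cg(A, b, tol, maxiter=500):
    """Plain conjugate gradients for SPD A, relative residual tolerance tol."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    rs = r @ r; nb = np.linalg.norm(b)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p; r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * nb:
            break
        p = r + (rs_new / rs) * p; rs = rs_new
    return x

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.random((100, 100)))
A = Q @ np.diag(np.arange(1.0, 101.0)) @ Q.T     # SPD test matrix, eigenvalues 1..100

x = rng.random(100); x /= np.linalg.norm(x)
inner_tol, last_res = 1e-2, np.inf
for _ in range(30):
    y = cg(A, x, inner_tol)                      # inexact inner solve of A y = x
    x = y / np.linalg.norm(y)
    lam = x @ (A @ x)                            # Rayleigh quotient
    res = np.linalg.norm(A @ x - lam * x)
    if res > 0.9 * last_res:                     # outer progress stalled:
        inner_tol = max(inner_tol * 1e-2, 1e-12) # tighten inner tolerance
    last_res = res
print(lam, res)                                  # lam approaches the smallest eigenvalue (1.0)
```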